content: Add attested build environments level requirements #1051
Conversation
The SLSA build track does not have a requirement for completeness of resolved dependencies as of today (L3). It feels like we should work to achieve that completeness before we further harden the build environment. It may be possible to add complete provenance alongside the other changes, but that feels like too much change from L3 to L4.
Software releases needing assurances about the integrity of the environment
used to create the release (e.g., specific compute platform, pre-build
tamper detection).
Build L4 usually requires significant changes to existing build platforms.
There are no prior examples to be able to assess the truthfulness of this statement. Significant changes could be required for any increased level.
Fair point. It's true that L3 already has a very similar statement, and we still wanted to be explicit about the fact that this L4 would require additional significant changes on top of L3. We can be more precise in what we mean here: for example, one of the significant requirements is hardware with very particular features (e.g., TPM or TEE support). Would that be more helpful here?
I'm sympathetic to the view that we can just repeat the language from L3 (unless we also want to rewrite the L3 'intended for' section). Maybe with the caveat "significant changes to existing L3 build platforms"?
I think the requirements below do a fine job of getting into the details.
docs/spec/v1.1/levels.md
Outdated
- SHOULD verify the build platform's attestations prior to a build and
  produce a [verification summary] about the check.
What would this look like? I would expect that the build system would create a build environment attestation on the artifact. I feel like it is a lot to ask for a producer to verify it and publish the VSA. Or does this not have to be a human/manual process?
The main goal of the proposed requirements is to make tampering prior to a build detectable. That is, at any point from when a build image is itself created/released to the point where a VM is deployed and waiting for a new build job to come in. This means the build platform will actually generate its attestations before and independently of the artifact. We have a figure showing this sequence which I think would be helpful in clarifying things.
I'll note that the point of using hardware-based integrity measurements and attestation for this is to reduce the amount of manual self-attestation and verification that needs to happen on the part of the build platform and producer. I'll also note that this requirement is a SHOULD, so producers aren't strictly required to check these attestations.
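As a rough, non-normative illustration of what that producer-side check and summary could look like, assuming the platform publishes its pre-build environment attestations alongside the build image digest (helper names and field choices here are illustrative, not taken from the spec):

```python
# Hypothetical producer-side check: confirm the platform's pre-build
# environment attestations cover the expected build image, then record the
# outcome in an abbreviated VSA-like summary. Structure and names are
# illustrative only.

def attestations_cover_image(attestations: list[dict], image_digest: str) -> bool:
    """True if every attestation lists the expected image digest as a subject."""
    return all(
        any(s.get("digest", {}).get("sha256") == image_digest
            for s in att.get("subject", []))
        for att in attestations
    )

def verification_summary(image_digest: str, passed: bool) -> dict:
    """Assemble an abbreviated in-toto style verification summary statement."""
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"digest": {"sha256": image_digest}}],
        "predicateType": "https://slsa.dev/verification_summary/v1",
        "predicate": {"verificationResult": "PASSED" if passed else "FAILED"},
    }
```

Whether this step is automated or manual would be up to the producer's tooling; the sketch only shows that it need not be a human process.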
docs/spec/v1.1/levels.md
Outdated
  produce a [verification summary] about the check.

- Build platform:
  - Each build image (i.e., VM or container) made available to software
The VMs must be built on a SLSA Build L3+ platform as well? What does this mean in practice?
Yes, we're using the creation process of the VM image to bootstrap a first degree of trust in the build environment. Today we're already placing trust in GHA to configure an L3 platform, so if GHA VM images were themselves built on GHA, this requirement would give us assurances that the VM images were built with the same L3 integrity. It's a bit recursive, much like building gcc with gcc is, but the guarantees provided by the rest of the requirements would be significantly weakened if we didn't have integrity for the build environment's build process. This is because the Provenance of the VM image gives us the known good value that can later be checked against when the VM is deployed.
I should also add that the conditions for meeting this requirement are meant to be binary. That is, L4 is achieved iff the VM image is built on a SLSA Build L3+ platform. We believe this is necessary to get around the issue of resolving what the transitive SLSA level would be.
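For illustration only, that binary condition could reduce to a policy check like the following against the VM image's SLSA Provenance (v1-style field paths; the trusted builder list is a hypothetical policy input, not something defined in the spec):

```python
# Hypothetical policy check: the VM image qualifies only if its Provenance
# names a builder that the verifier already accepts at Build L3 or higher.
TRUSTED_L3_BUILDERS = {
    "https://builder.example.com/hosted@v1",  # illustrative builder ID
}

def vm_image_built_on_l3_plus(provenance: dict) -> bool:
    builder_id = (
        provenance.get("predicate", {})
        .get("runDetails", {})
        .get("builder", {})
        .get("id", "")
    )
    return builder_id in TRUSTED_L3_BUILDERS
```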
This text also presupposes that all builders run in VMs. Are we categorically rejecting remote attestations from physical TPMs on non-virtual machines?
@deeglaze Can you please clarify, when you say non-VM, do you mean that a build could be running inside a container (backed by a physical TPM), or a bare metal environment, or either? There are a few reasons why we're preferring VM-based environments, but I'd like to track the discussion around other settings.
I mean a task that is launched by a trusted orchestration daemon that establishes some resource limitations and filesystem access restrictions as Linux allows, but not necessarily following an OCI container description. At Google we have build servers that only run builds but still run on Borg. All the node software is accounted for with Titan TPMs. There’s no strong reason for us to require builds to run in VMs since we have everything measured and can account for the machine access by the identity and access management system and (measured) software-enforced ACLs. The build environment is managed by this production identity system before getting to the build job, which then measures the inputs for SLSA L3 but without incorporating the measurement into PCRs.
The hardware attestation of prodID and software attestation by Borg ID (BCID) protect our build environment integrity in an auditable manner, but without incorporating the entirety of our production ecosystem in the build attestation. The new SLSA level should allow for ecosystem measurements to be held back if the operational security and physical security of the servers can be attested to implicitly with the SLSA signing key release mechanism.
Now if you’re saying that the build ecosystem integrity is only a means to an end of requiring the build to be run within confidential computing technologies, I’d say that is not the most important goal to reach. If you want the build ecosystem to satisfy holistic properties like “no humans may access the environment” then you have to talk about more security commitments of the software, not just its measurement.
Thanks for the detailed explanation!
docs/spec/v1.1/levels.md
Outdated
  producers MUST be built on a SLSA Build L3+ platform. The generated
  SLSA Provenance MUST be distributed to allow for independent
  verification.
The provenance for what must be distributed, the build image? This is already a requirement for build L1-3: https://slsa.dev/spec/v1.0/requirements#distribute-provenance. Is the intention for this provenance to be verified somehow?
docs/spec/v1.1/levels.md
Outdated
- Distribution of SLSA Provenance for pre-installed software within the
  build image MAY be best-effort.
How would this distribution happen? Are we trying to add recursive SLSA into this requirement? Is that too much to do?
We aren't trying to add recursive SLSA in this requirement. The intent here is to clarify that while SLSA Provenance is required for the build image creation, SLSA Provenance for pre-installed software (e.g., the Linux kernel, packages, etc.) is not expected of the build platform.
docs/spec/v1.1/levels.md
Outdated
- The boot process of each build environment MUST be measured and
  attested using a [TCG-compliant measured boot] mechanism. The
  attestation MUST be authenticated and distributed for independent
  verification.
These changes are proposed to be part of the build track but we are now adding additional required attestations into the mix (this attestation + the producer's VSA mentioned earlier). I know that we do not have a 1:1 relationship between attestations and tracks, but this seems to be muddling the simplicity of the build track. Would this proposal be better as its own track?
You raise a valid point about preserving the simplicity of the build track, and the types of attestations being a part of that. The intent here is to encapsulate the hardware-based attestations inside an in-toto attestation, to at least ensure we remain within the SLSA attestation model. So at Build L4 there would be two types of in-toto attestations generated: SLSA Provenance, and SCAI encapsulating the hardware-based attestations. We think this approach helps us keep the attestation types limited to a select few. I'll add a TODO item to include examples of these attestations to illustrate what we would expect, and that'll hopefully clarify some of these questions.
On the question of converting this workstream into its own track, our original proposal went down that path. But we got some very convincing feedback from some in the community that that would actually introduce more complexity than is necessary in SLSA overall when the Build Track already pertains to properties of the build platform.
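To make the "two attestation types" idea concrete, here is a rough, non-normative sketch of how hardware evidence might be wrapped in an in-toto statement with a SCAI-style predicate; the predicate type URI, attribute name, and media type are approximations, not the final format:

```python
# Illustrative only: wrap hardware-rooted evidence (e.g., a base64-encoded TPM
# quote) in an in-toto statement so that L4 circulates just SLSA Provenance
# plus one evidence-carrying attestation type.
def wrap_hw_evidence(image_digest: str, tpm_quote_b64: str) -> dict:
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{"name": "build-image", "digest": {"sha256": image_digest}}],
        "predicateType": "https://in-toto.io/attestation/scai/attribute-report",
        "predicate": {
            "attributes": [{
                "attribute": "MEASURED_BOOT",  # illustrative attribute name
                "evidence": {
                    "mediaType": "application/x-tpm-quote",  # hypothetical media type
                    "content": tpm_quote_b64,
                },
            }],
        },
    }
```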
docs/spec/v1.1/levels.md
Outdated
- Read-write block devices or file system paths MUST be encrypted
  with a key that is only accessible within the build image.
Does this need to be included in the attestation?
Can you clarify the presence of the key? I assume that you are not trying to say that the encryption key should be present in the image used to run the VM/container as then it would be reused for multiple running instances of the build.
docs/spec/v1.1/levels.md
Outdated
- Before launching a new environment based on a build image (i.e., VM
  or container instance), its SLSA Provenance MUST be verified.
What are the criteria for verification here? Is it just the identity or is there more verification needed than that? Are we supposed to verify that it has met some SLSA level?
The main aspect to verify here is the identity/hash of the build image against what's recorded in the Provenance. Checking the SLSA level of the Provenance here gives extra assurances about the trustworthiness of the hash. In practice, this check would follow the standard verification flow. I'll revise this requirement to make the intent clearer.
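Something like the following non-normative sketch captures the check being described: hash the image about to be launched and compare it to the subject digest in its Provenance. Helper names are hypothetical, and the SLSA-level check on the Provenance itself is not shown:

```python
import hashlib

def image_sha256(image_path: str) -> str:
    """Compute the digest of the build image file about to be launched."""
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def provenance_matches_image(provenance: dict, image_path: str) -> bool:
    """True if the Provenance's subject records the digest of this image."""
    measured = image_sha256(image_path)
    return any(
        subj.get("digest", {}).get("sha256") == measured
        for subj in provenance.get("subject", [])
    )
```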
docs/spec/v1.1/levels.md
Outdated
- Before launching a new environment based on a build image (i.e., VM
  or container instance), its SLSA Provenance MUST be verified.
- Before making a build environment available for a build request:
  - The boot process and state of disk image MUST be verified, and
This verification feels different than the previously mentioned one. Is it?
docs/spec/v1.1/levels.md
Outdated
  produce a [verification summary] about the check.

- Build platform:
  - Each build image (i.e., VM or container) made available to software
I recommend removing these requirements. In my mind, the requirement should be relatively simple: there is a hardware-backed attestation (SEV-SNP, TDX, or equivalent) attesting to:
- the entire initial state of the build environment (bootloader, kernel, filesystem, etc)
- all of the inputs to the build
The properties of those inputs, such as the VM, seem outside the scope. There are many inputs to the build, and I would assume that we should verify each of those separately as part of some "transitive SLSA" verification.
Agreed, I realized looking at this again more recently that this list really enumerates low-level mechanisms for achieving the high-level properties, rather than stating those high-level properties. My thinking is that a lot of the content currently here would ultimately move to the requirements.md description.
The properties of those inputs, such as the VM, seem outside the scope. There are many inputs to the build, and I would assume that we should verify each of those separately as part of some "transitive SLSA" verification.
I generally agree with the notion that inputs/dependencies to the build should be verified separately. At the same time, the properties of the VM specifically are needed to check the initial state of the build environment. That is, in order for the platform or a strict producer to verify the initial state of the build environment, it needs known good reference values to check against. The SLSA L3 Provenance for the build image's build provides this known good value, with strong integrity assurances, which can then be bootstrapped to gain trust in the integrity of the initial state of the build environment.
there is a hardware-backed attestation (SEV-SNP, TDX, or equivalent) attesting to ... all of the inputs to the build
@MarkLodato Thinking about this some more as I reformulate the requirements. What do you mean by "inputs to the build"? Is this about the completeness of external parameters in the Provenance? If so, one way to accomplish this requirement is to sign the Provenance itself with the hardware root of trust. Is this what you have in mind?
- Runtime changes to the build environment's disk image SHOULD be
  observable at runtime by the executing build request.

<dt>Benefits<dd>
An alternate framing would be that it removes almost all of the build platform from the root of trust. All that's left is the hardware vendor and physical access.
At a high level, this is true, and we should probably keep this section more concise than it currently is. We did want to capture some of the nuance of doing this practice, though. For example, you actually need to still place a fair amount of trust in the cloud provider that's hosting the VMs running the builds to not tamper with components like the host OS or the vTPM implementation being used to check the integrity of the build environment. The other part we wanted to emphasize is the machine-checkable aspect of relying on the attestable hardware, compared to the expectations for verification at L3. Maybe this nuance doesn't need to be covered in such detail here?
I see the importance of capturing the nuance. I think what Mark might be getting at is that the benefits as currently enumerated are a bit abstract.
E.g. "Greatly reduces trust in a hosted build platform by increasing observability into the level of integrity of the build environment."
Could this instead be something like "Greatly reduces the TCB of a hosted build platform by preventing tampering with the build execution environment, leaving only the OS and vTPM in scope" or something like that?
- MUST generate and distribute attestations to good known integrity
  measurements of the entire initial state of the build environment
  (i.e., VM/container image, boot process and filesystem). All
  attestations MUST be authenticated by a hardware root of trust.
We may want to clarify "hardware" here. A vTPM is not "hardware" but is a software implementation of TPM that appears to the VM as if it were hardware.
There's actually a note at the end of the requirements clarifying that virtual hardware also works. Do you think that's enough, or is it worth clarifying up front?
In a TEE featuring a vTPM there should still be a hardware root of trust. The vTPM would be linked to signed HW evidence, for example by mixing a hash of the vTPM's AK public key into the report-data field of a TEE's attestation report.
So, the original statement sounds fine to me. I assume a HW RoT is key and not something that can be supplanted by software.
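As a non-normative illustration of that linkage, the binding step might look like the following; the 64-byte size follows the SEV-SNP report_data field, and the verifier would recompute the same hash from the vTPM's AK certificate and compare it against the report:

```python
import hashlib

def report_data_for_ak(ak_public_der: bytes) -> bytes:
    """Bind the vTPM's attestation key to the TEE report via its user data field.

    The SHA-512 digest is 64 bytes, matching e.g. the SEV-SNP report_data size.
    Requesting the actual hardware report is platform-specific and not shown.
    """
    return hashlib.sha512(ak_public_der).digest()
```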
I do wonder if there's a useful middle ground where vTPMs that aren't hardware backed can still provide meaningful value. If this level requires full hardware backed support would that be too big a step?
I think another potential middle ground is an HSM whose use requires meeting certain authentication requirements (whether secure/measured boot, validating PCRs, some validated token, etc.)?
In this case I'd explicitly state here that "software vTPMs" are valid too, something like:
All attestations MUST be authenticated by a root of trust. A valid root of trust might be anchored in a dedicated HW module or might be formed via a chain of cryptographic signatures.
The latter would be referring to vTPMs that are provisioned by a cloud service provider on tenant VMs and for which you'd get a CSP-provided cert-chain to verify.
<td>

The build platform generated an authenticated attestation to the integrity
of the entire initial state of the build environment (i.e., VM/container
I think I would separate VM and container images. VM image should always be attested while the container image should be attested if containers are being used. Something like:
(i.e., VM image, kernel, filesystem, and container image if containers are used)
To clarify, you specifically mean the case where a container is used within a VM, right? My intent here was more to cover the scenario in which container-only build images are used. Whether or not containers are sufficient to achieve L3, I think is a related but separate question.
- Greatly reduces trust in a hosted build platform by increasing
  observability into the level of integrity of the build environment.
- Provides machine-checkable, cryptographic, hardware-rooted evidence
I know that "hardware-rooted" has been used a fair bit when describing confidential computing environments, but it is misleading and not representative of the desired property.
- Provides machine-checkable, cryptographic, hardware-rooted evidence
The dynamic root of trust for measurement (DRTM) provides machine-checkable and integrity-protected evidence of the build environment as authenticated by the DRTM's root of trust.
The DRTM should follow industry standards for isolation to be protected from workload- and host-based tampering.
I'm not sure binding the specification to TCG as the only trustworthy industry standard body is required.
  MUST be integrity measured and attested. The boot and disk
  attestations MUST be distributed to allow for independent
  verification.
- When deploying a new build environment:
Are these deployment assertions meant to be checkable from each of the build environment's output build attestations, or are these operational attestations for claiming SLSA L4? I can't really prove to folks that I did a check before starting the builder VM unless I'm already in a builder VM, which doesn't seem like the right scope.
OR is this saying that the build image's SLSA provenance MUST be verified as a component of the DRTM's remote attestation for the built artifacts? Is delegating that verification to a VSA already understood as an acceptable verification?
  tampered with.
- A unique immutable build environment identifier (e.g.,
  cryptographic keypair) MUST be generated and cryptographically bound
  to the build environment via attestation. This *deploy-time
Is this item meant to address the Google doc comment of attestation verification policy-based key provisioning for SLSA L4 build attestation signing keys? If so, I think this could be clearer. I'm not familiar enough with SLSA mechanics to know whether SLSA level is expected to be assigned by a specific cryptographic key usage.
  and disk image integrity have been verified, and distributed to
  allow for independent verification.
- When accepting a new build request (e.g., GHA build job):
  - The build environment's deploy-time attestation and uniqueness of its
What does this mean? I imagine that the SLSA L4 attestation signing key would be deleted from memory after use from a previous build job, so yes you'd need to re-establish the attestation with the key management service, but only AFTER the build request has completed. You don't want the key material in memory while an untrusted build process is taking place, even if it is isolated.
Who does the build environment attestation verification? It seems like the build job is accepting itself from this wording. The build requester would not necessarily be the one with an appropriate attestation verification policy to apply before even initiating a build request, but maybe that's what you meant? Is a build request supposed to happen over an attested channel?
- Run-time changes to the build environment's disk image SHOULD be
  observable at run-time by the executing build. These changes NEED NOT
  be attested.
- Boot, disk, deploy- and request-time attestations MUST be authenticated
I don't think request-time attestation is what's desirable. Although it could be an enhancement for tenant confidentiality, i.e., "I don't want to release my code to a builder I haven't vetted."
For build integrity, we want response-time attestation in order for both input and output resource descriptors to be cryptographically bound to the attestation. This prevents replay attacks given that the attestations are meant to be verified offline (i.e., not during the build process where the environment could respond to a cryptographic challenge for freshness).
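A minimal sketch of that response-time binding, assuming the input and output resource descriptors have already been reduced to digest strings (the canonicalization and where the result lands, e.g. a PCR extension or TEE report user data, are illustrative choices):

```python
import hashlib
import json

def build_binding_digest(input_digests: list[str], output_digests: list[str]) -> bytes:
    """Digest over the build's inputs and outputs, to be extended into a PCR or
    placed in a TEE report's user data so that a stale attestation cannot be
    replayed against a different build's artifacts."""
    payload = json.dumps(
        {"inputs": sorted(input_digests), "outputs": sorted(output_digests)},
        sort_keys=True,
        separators=(",", ":"),
    ).encode()
    return hashlib.sha256(payload).digest()
```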
  attestations MUST be authenticated by a hardware root of trust.
- MUST capture all of the inputs to the build in the Provenance.
- MUST verify the build environment integrity against the build
  environment attestations prior to passing control of the build to
When you say "prior to", do you mean that there is a separator event between boot to the build executor and the executor running any build request? That's my expectation, but this also seems like there could be a temporal prior, where an attestation verification service must do this verification before providing a ticket (for example) to permit the build request to proceed. I don't think the latter is desirable. See my later comment about response-time attestations.
| Primary Term | Description |
| --- | --- |
| Build image | The run-time context within a build environment, such as the VM or container image. Individual components of a build image are provided by both the hosted build platform and tenant, and include the build executor, platform-provided pre-installed guest OS and packages, and the tenant’s build steps and external parameters. |
This definition of build image as including parts from a tenant seems contrary to prior uses of the term to mean specifically the image of the build environment.
I would advise checking prior instances of "build image" and disambiguate them with "build environment image" and "tenant build image". A build platform will attest to itself, but it may also mount a container including a tenant's build toolchain for executing a build request, so that's an important distinction to make.
| Primary Term | Description |
| --- | --- |
| Build image | The run-time context within a build environment, such as the VM or container image. Individual components of a build image are provided by both the hosted build platform and tenant, and include the build executor, platform-provided pre-installed guest OS and packages, and the tenant’s build steps and external parameters. |
| Build executor | The platform-provided program dedicated to executing the tenant’s build definition, i.e., running the build, within the build image. |
Clarify that the build executor must be a measured component of the build environment image.
| --- | --- |
| Build image | The run-time context within a build environment, such as the VM or container image. Individual components of a build image are provided by both the hosted build platform and tenant, and include the build executor, platform-provided pre-installed guest OS and packages, and the tenant’s build steps and external parameters. |
| Build executor | The platform-provided program dedicated to executing the tenant’s build definition, i.e., running the build, within the build image. |
| Build request | The process of assigning a build to a pre-provisioned build environment on a hosted build process. |
I always think of requests as messages. Perhaps,
"A user-provided message to the build platform that is used to assign a build to a pre-provisioned build environment on a hosted build process."
But this could be splitting hairs.
Fair point. This isn't really about the exchanged messages, but about the action of assigning and dispatching a tenant's build process to a pre-deployed build environment. More recently, I've been using the term "build environment dispatch" for this step in the build environment's lifecycle, which is hopefully a little clearer.
@mdwood-intel @pdxjohnny PTAL
<td>Hardware-Attested
<td>

The build platform generated an authenticated attestation to the integrity
That is, generating an in-toto attestation about the entire initial state of the build environment?
We may run into confusion here between attestation and attestation:
The process of vouching for the accuracy of information.
from TCG Glossary, and
An authenticated statement (metadata) about a software artifact or collection of software artifacts.
from SLSA Terminology
It might be worth adding a note to clarify to readers who have familiarity with both terms.
image, kernel, and filesystem) was generated at creation time and verified
at deployment time. The build platform also attested to the build request.
In other words, tampering with the initial state of the build environment
MUST be detectable by the platform itself and the build.
"an the build."... executor? agent?
3. When a new *build request* is made, the platform assigns the request to
   a deployed build environment. For SLSA Build L4, the tenant may validate
   the measurement of the build environment.
4. Finally, the *build executior* running within the environment executes
4. Finally, the *build executior* running within the environment executes
4. Finally, the *build executor* running within the environment executes
Software releases needing assurances about the integrity of the environment
used to create the release (e.g., specific compute platform, pre-build
tamper detection).
Build L4 usually requires significant changes to existing build platforms.
How about:
Build L4 usually requires significant changes to existing build platforms.
Build L4 may require significant changes to existing build platforms.
and then list some of the requirements?
I'd countersign this view. Completeness of resolved dependencies is a very helpful property and I'm not sure it would make sense for that property to be delayed until L5 (it seems 'easier' to achieve than hardware backing?)

Taking a quick glance over some of this PR, I also think there's enough scope in this topic for a platform security track of its own. E.g. L1 - Provides the attestation to communicate platform trust, but doesn't do much beyond that (similar to build L1 and what we're thinking for source L1).
Have we considered that there may be other ways, like reproducible builds, to remove trust in the hosted build platform? Would that provide equivalent (possibly better?) guarantees? Does encoding hardware trust as a discrete build level remove that option from folks? Perhaps reproducible builds could be listed as a way to achieve these goals? (It would require a fair amount of re-wording...)
- Build platform:
  - MUST generate and distribute attestations to good known integrity
    measurements of the entire initial state of the build environment
I'm worried that this definition precludes any sort of distributed build from ever meeting SLSA L4 - it seems to assume that a build is always hermetic within a single build environment, and so easily described as host + VM + filesystem + command -> output. Almost all of the high-trust builds produced by Google, for example, are produced via distributed systems across multiple hosts, where the network, virtual filesystems, etc. are inextricably linked to the process. Many could not be produced on any single machine in any reasonable amount of time, if at all. (Needing many TiBs of scratch space in total, many core-days of work, etc.)
I'd have no objection to defining a SLSA framework for hermetic single-host builds, if that is independently useful, but for "L4" where the implicit assumption is that all high-trust builders should aim to achieve it, I'm not yet convinced that this is the correct successor to L3. Though even here, "trust of software" seems still necessary, in that I don't know how to go from a description of a build to a fully realized set of input files without already having to have trusted that some build platform software has interpreted it correctly.
Perhaps the right target is a broadened version of the same: a chain of hardware-rooted build attestations to what software was running on each node leveraged in the build process, with that software in turn attesting what environment it set up locally, etc? This would make it easier to be "tamper evident" for hardware attacks, though still require trusting that the build platform software is sufficiently bug- and backdoor-free.
Failing that, @TomHennen 's comment elsewhere about "independently reproducible" might be a better choice for L4. Though that may also fail on practicality grounds: I'm not sure of many systems that have ever achieved multi-party bitwise reproducibility (I don't think Debian is there yet?), and I don't know what that would look like for closed-source systems that only execute on one build platform in practice.
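For a rough sense of what the "chain of hardware-rooted build attestations" target above might look like mechanically, a verifier could require one valid, hardware-rooted attestation per node recorded for the build. Everything in this sketch is invented for illustration, including the callback that does the platform-specific report or quote check:

```python
# Hypothetical: accept a distributed build only if every participating node
# has a verified, hardware-rooted attestation over its own software stack.
def all_nodes_attested(node_ids: list[str], attestations: dict[str, dict],
                       verify_hw_attestation) -> bool:
    """verify_hw_attestation(node_id, attestation) -> bool is supplied by the
    platform-specific verifier (TPM quote check, TEE report check, etc.)."""
    return all(
        node_id in attestations
        and verify_hw_attestation(node_id, attestations[node_id])
        for node_id in node_ids
    )
```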
Perhaps the right target is a broadened version of the same: a chain of hardware-rooted build attestations to what software was running on each node leveraged in the build process, with that software in turn attesting what environment it set up locally, etc?
@EricBurnett I agree with this target (and I think it's consistent with the spirit - the build environment is potentially distributed). It would be a small change in the next line to make it more explicit -
- (i.e., VM/container image, boot process and filesystem)
+ (i.e., VM/container image, boot process, and filesystem, for each host in the build environment)
I need to catch up to most comments, but want to respond to a few recent ones first:
This question keeps coming up, and I've been thinking about this a LOT recently. Our stance so far has been that reproducible builds and (hardware-)attested build environments are not mutually exclusive, nor is one better than the other. Both approaches reduce trust in the build platform, but they do so in different ways. My thinking is because they're fundamentally solving different problems.

The similarities: Reproducible builds give a producer or consumer assurances that a specific build process is consistent and was very likely not tampered with, because you can compare the build results. Relying on trusted hardware can give all parties cryptographically verifiable evidence that the build environment that ran your build wasn't tampered with, because you can check the hardware-backed attestations for it. The outcome of using reproducible builds and hardware-attested builds ends up being pretty much the same: "was my build platform tampered with?"

The differences: I see the main differences in the trust model. In the former, the build platform at an organizational level doesn't need to be fully trusted. The more distinct, independent rebuilders you use, the stronger your assurances are, assuming enough rebuilders aren't colluding. In the latter, the build platform system doesn't need to be fully trusted. As long as you trust the hardware-based mechanisms, your assurances come from being able to trace a particular build back to a specific compute environment and check the contents of the attestations against the expected known good integrity values for that compute environment's software.

Neither fully removes trust from the build platform, but it's greatly reduced with respect to different aspects of the platform in both cases.
As I describe above, I think the answer here should be no, because they aren't quite solving the same problem. In fact, we're intentionally vague about reproducible builds because we think there are scenarios where one approach may be enough depending on your threat model and/or adoption requirements (as this #1051 (comment) mentions at the end), and there are scenarios where both may be useful, and we wanted to leave the option open for either scenario. How this is reflected in build levels I think is a separate question.
  independent verification.
- The boot process of each build environment MUST be measured and
  attested using a [TCG-compliant measured boot] mechanism. In
  addition, the initial state of the build environment's disk image
There are reasons for the disk image to not be entirely measured, so this seems like it could be too coarse of a requirement. There are ways security-critical components of user space are measured. They go past the PCRs 0-7 in the TCG-compliant measured boot link.
This whole-disk integrity requirement would disallow content-addressed build caches that already have SLSA attestations, such as what Bazel.build uses for performance.
The fact of the matter is that we have bespoke ways of evaluating attestation evidence and specifying policy for their acceptance. There is no standard since OSes consider different things to be security-critical.
Perhaps, "every file read from disk must be integrity-protected by some means traceable to the build environment identity or build request"
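For illustration, the file-level alternative suggested here could amount to checking every file read during the build against a manifest of expected digests that is itself bound to the build environment identity or build request; the manifest format and names below are hypothetical:

```python
import hashlib

def file_matches_manifest(path: str, manifest: dict[str, str]) -> bool:
    """True if the file's SHA-256 matches the digest recorded for it in an
    attested manifest (content-addressed caches fit this model naturally)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return manifest.get(path) == h.hexdigest()
```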
I'll be closing this PR soon. Let's move this discussion to #1107 !
Ah, I guess how to reflect it in the build levels is what I'm most interested in at the moment. :)

The way I'd read defining L4 as requiring hardware-attested builds is that it would preclude someone from achieving a similar goal via reproducibility. While we could then define level 5 as reproducible, that would seem to require hardware-backed attestations when doing reproducible builds. Is that desirable? Is my reading too strict?

The way I think about levels generally is that they're additive and single-tracked. What we seem to have here is a branch of sorts? I don't know how to handle that within the levels framework. I could see handling it within a specific level: "you have choices on how to reduce trust in the build platform: a) hardware-backed attestations, b) reproducible builds, c) both a & b".

To be clear, I do see the distinction in how these two approaches reduce the need to trust the build platform; but I wonder how meaningful that distinction would be for downstream users? Maybe it's a distinction that could be made in whatever attestations are used to communicate level 4-ness?
Per the 7/22 SLSA spec meeting, these requirements are to be re-worked as a separate track. So, this PR is being superseded by #1115 . |
This (draft) PR introduces the following spec changes associated with #975. Per #975 (comment) and #975 (comment) the spec enhancements are being proposed as a new Build track level. The spec changes introduced in this PR are meant to be complementary to possible requirements being developed in parallel in #977 .
Spec changes:
Part 1 of #975 CC @chkimes